
    Reinforcement learning in populations of spiking neurons

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning a flip side becomes apparent: learning slows down with increasing population size, since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule in which the learning of a stimulus is interleaved with the processing of the subsequent one. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
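
    The idea can be illustrated with the following toy sketch, which is not the paper's rule: a population of stochastic logistic units learns a binary task from a global reinforcement signal, and an additional population-response signal modulates each unit's update. The task, the form of the modulation, and all parameters are illustrative assumptions.

        # Toy sketch: reward-modulated plasticity in a population of stochastic
        # units, additionally modulated by a population-response signal.
        # Task, modulation form and parameters are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_neurons, n_inputs, lr = 50, 20, 0.05
        W = rng.normal(0.0, 0.1, size=(n_neurons, n_inputs))   # synaptic weights

        for step in range(5000):
            x = rng.normal(size=n_inputs)
            target = 1.0 if x[0] + x[1] > 0.0 else 0.0          # arbitrary toy task
            p = 1.0 / (1.0 + np.exp(-W @ x))                    # spike probabilities
            s = (rng.random(n_neurons) < p).astype(float)       # stochastic spikes
            pop = s.mean()                                      # population response
            action = 1.0 if pop > 0.5 else 0.0                  # population decision
            R = 1.0 if action == target else -1.0               # global reinforcement
            elig = s - p                                        # per-unit eligibility
            # Population-response feedback: units whose spike agreed with the
            # population decision are updated more strongly (illustrative choice).
            gain = np.where(s == action, 1.0, 0.5)
            W += lr * R * (gain * elig)[:, None] * x[None, :]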

    Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled activation subunits, while the DA was modeled using uncoupled activation subunits. Implementations of DA with coupled subunits, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady-state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable, allowing an easy and efficient DA implementation. The algorithm was tested in a voltage-clamp simulation and in two different current-clamp simulations, yielding the same results as MC modeling. Moreover, the DA method was considerably more efficient than MC methods.
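
    As a concrete illustration of the approach (restricted to a two-state scheme rather than the paper's general multi-state derivation), the sketch below simulates a population of C <-> O channels under voltage clamp with an Euler-Maruyama step, computing the noise term from the instantaneous occupancy instead of a steady-state approximation. The rate constants and channel count are assumed values.

        # Diffusion approximation (Langevin) for N two-state channels, C <-> O:
        # drift = alpha*(1-o) - beta*o, diffusion = (alpha*(1-o) + beta*o)/N,
        # both evaluated at the current open fraction o.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 1000                        # number of channels
        alpha, beta = 0.5, 0.2          # opening / closing rates (1/ms), assumed
        dt, T = 0.01, 200.0             # time step and duration (ms)

        o = alpha / (alpha + beta)      # start at the deterministic steady state
        trace = []
        for _ in range(int(T / dt)):
            drift = alpha * (1.0 - o) - beta * o
            diff = (alpha * (1.0 - o) + beta * o) / N
            o += drift * dt + np.sqrt(max(diff, 0.0) * dt) * rng.normal()
            o = min(max(o, 0.0), 1.0)   # keep the open fraction in [0, 1]
            trace.append(o)
        # trace holds the stochastic open fraction; its variance shrinks as 1/N,
        # as a Markov-chain simulation of N channels would also show.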

    Accurate path integration in continuous attractor network models of grid cells

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of ~10–100 meters and ~1–10 minutes. These findings provide a proof of concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
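
    A heavily simplified 1-D sketch of the mechanism (a ring attractor rather than the paper's 2-D grid-cell networks; the cosine kernel and all parameters are illustrative choices): a symmetric recurrent kernel sustains an activity bump, and a velocity-scaled antisymmetric kernel shifts the bump, so its position integrates the velocity input over time.

        # 1-D ring attractor that path-integrates a velocity signal: a cosine
        # recurrent kernel sustains a bump, and adding v * sin-kernel rotates the
        # effective kernel by ~v per step, moving the bump at speed v.
        import numpy as np

        n = 128
        theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        d = theta[:, None] - theta[None, :]
        W_sym = np.cos(d)                    # symmetric kernel (bump maintenance)
        W_asym = np.sin(d)                   # antisymmetric kernel (bump shifting)

        r = np.maximum(np.cos(theta), 0.0)   # initial bump centred at theta = 0
        r /= r.sum()

        v = 0.01                             # angular velocity input (rad/step)
        for step in range(500):
            inp = (W_sym + v * W_asym) @ r
            r = np.maximum(inp, 0.0)         # rectification
            r /= r.sum()                     # normalization ~ global inhibition

        # After 500 steps the bump sits near 500 * v radians: its position is the
        # time-integral of the velocity input.
        integrated_position = theta[np.argmax(r)]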

    The Song Must Go On: Resilience of the Songbird Vocal Motor Pathway

    Stereotyped sequences of neural activity underlie learned vocal behavior in songbirds; principal neurons in the cortical motor nucleus HVC fire in stereotyped sequences with millisecond precision across multiple renditions of a song. The geometry of neural connections underlying these sequences is not known in detail, though feed-forward chains are commonly assumed in theoretical models of sequential neural activity. In songbirds, a well-defined cortical-thalamic motor circuit exists, but little is known about the fine-grained structure of connections within each song nucleus. To examine whether the structure of song is critically dependent on long-range connections within HVC, we bilaterally transected the nucleus along the anterior-posterior axis in normal-hearing and deafened birds. The disruption leads to a slowing of song as well as an increase in acoustic variability. These effects are reversed on a time-scale of days, even in deafened birds or in birds that are prevented from singing post-transection. The stereotyped song of zebra finches includes acoustic details that span from milliseconds to seconds, making it one of the most precise learned behaviors in the animal kingdom. This detailed motor pattern is resilient to disruption of connections at the cortical level, and the details of song variability and duration are maintained by offline homeostasis of the song circuit.

    Differential influences of environment and self-motion on place and grid cell firing

    Place and grid cells in the hippocampal formation provide foundational representations of environmental location, and potentially of locations within conceptual spaces. Some accounts predict that environmental sensory information and self-motion are encoded in complementary representations, while other models suggest that both features combine to produce a single coherent representation. Here, we use virtual reality to dissociate visual environmental inputs from physical motion inputs, while recording place and grid cells in mice navigating virtual open arenas. Place cell firing patterns predominantly reflect visual inputs, while grid cell activity reflects a greater influence of physical motion. Thus, even when recorded simultaneously, place and grid cell firing patterns differentially reflect environmental information (or ‘states’) and physical self-motion (or ‘transitions’), and need not be mutually coherent.

    Support for a synaptic chain model of neuronal sequence generation

    In songbirds, the remarkable temporal precision of song is generated by a sparse sequence of bursts in the premotor nucleus HVC. To distinguish between two possible classes of models of neural sequence generation, we carried out intracellular recordings of HVC neurons in singing zebra finches (Taeniopygia guttata). We found that the subthreshold membrane potential is characterized by a large, rapid depolarization 5–10 ms before burst onset, consistent with a synaptically connected chain of neurons in HVC. We found no evidence for the slow membrane potential modulation predicted by models in which burst timing is controlled by subthreshold dynamics. Furthermore, bursts ride on an underlying depolarization of ~10-ms duration, probably the result of a regenerative calcium spike within HVC neurons that could facilitate the propagation of activity through a chain network with high temporal precision. Our results provide insight into the fundamental mechanisms by which neural circuits can generate complex sequential behaviours. Funding: National Institutes of Health (U.S.) grants MH067105 and DC009280; National Science Foundation (U.S.) IOS-0827731; Alfred P. Sloan Foundation Research Fellowship.
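
    The synaptic-chain picture supported by these recordings can be illustrated with a minimal simulation (the neuron and synapse parameters below are illustrative assumptions, not values from the paper): groups of leaky integrate-and-fire units wired feed-forwardly produce a precisely timed sequence, with each group rapidly depolarized by the preceding group shortly before it fires.

        # Feed-forward chain of leaky integrate-and-fire groups (one representative
        # unit per group). Activity injected into group 0 propagates down the chain,
        # one group per synaptic delay. Parameters are illustrative assumptions.
        import numpy as np

        n_groups, dt = 20, 0.1                 # chain length; time step (ms)
        tau_m, v_th, v_reset = 10.0, 1.0, 0.0  # membrane constants (ms, a.u.)
        w, syn_delay = 1.5, 5.0                # feed-forward weight; delay (ms)

        n_steps = int(400 / dt)
        delay_steps = int(syn_delay / dt)
        v = np.zeros(n_groups)
        spikes = np.zeros((n_steps, n_groups), dtype=bool)

        for t in range(n_steps):
            drive = np.zeros(n_groups)
            if t == 0:
                drive[0] = 2.0                 # kick the first group
            if t >= delay_steps:
                # each group is driven by the previous group's spikes one delay ago
                drive[1:] += w * spikes[t - delay_steps, :-1]
            v += dt / tau_m * (-v) + drive     # leak plus synaptic kicks
            fired = v >= v_th
            spikes[t] = fired
            v[fired] = v_reset

        burst_times = spikes.argmax(axis=0) * dt   # ms; ~one group every 5 ms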

    Robustness of Learning That Is Based on Covariance-Driven Synaptic Plasticity

    It is widely believed that learning is due, at least in part, to long-lasting modifications of the strengths of synapses in the brain. Theoretical studies have shown that a family of synaptic plasticity rules, in which synaptic changes are driven by covariance, is particularly useful for many forms of learning, including associative memory, gradient estimation, and operant conditioning. Covariance-based plasticity is, however, inherently sensitive: even a slight mistuning of the parameters of a covariance-based plasticity rule is likely to result in substantial changes in synaptic efficacies, which calls the biological relevance of covariance-based plasticity models into question. Here, we study the effects of mistuning parameters of the plasticity rule in a decision-making model in which synaptic plasticity is driven by the covariance of reward and neural activity. An exact covariance plasticity rule yields Herrnstein's matching law. We show that although the effect of slight mistuning of the plasticity rule on the synaptic efficacies is large, the behavioral effect is small. Thus, matching behavior is robust to mistuning of the parameters of the covariance-based plasticity rule. Furthermore, the mistuned covariance rule results in undermatching, which is consistent with experimentally observed behavior. These results substantiate the hypothesis that approximate covariance-based synaptic plasticity underlies operant conditioning. However, we show that mistuning of the mean subtraction makes behavior sensitive to mistuning of the properties of the decision-making network. Thus, there is a tradeoff between the robustness of matching behavior to changes in the plasticity rule and its robustness to changes in the properties of the decision-making network.
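
    A toy sketch of such a rule in a two-alternative task (the linear decision readout, the concurrent baiting schedule and all parameters are illustrative assumptions; the paper's decision-making network is far more detailed): synaptic change is driven by the covariance of reward and activity, and a parameter eps scales down the mean-reward subtraction to mimic the mistuning discussed above.

        # Covariance-driven plasticity in a two-alternative choice task with a
        # concurrent baiting schedule (a reward arms with some probability per
        # trial and stays available until that alternative is chosen). With
        # eps = 0 the expected update is the covariance of reward and activity;
        # eps != 0 mistunes the mean-reward subtraction.
        import numpy as np

        rng = np.random.default_rng(2)
        p_arm = np.array([0.2, 0.05])       # baiting probabilities (assumed)
        w = np.array([1.0, 1.0])            # efficacies driving each alternative
        lr, eps = 0.005, 0.0                # learning rate; mistuning knob
        R_mean = 0.0                        # running estimate of mean reward
        armed = np.zeros(2, dtype=bool)
        income = np.zeros(2)                # rewards actually earned per side

        for trial in range(200000):
            armed |= rng.random(2) < p_arm
            p_choose = w / w.sum()                          # linear readout
            a = rng.multinomial(1, p_choose).astype(float)  # chosen alternative
            choice = int(a.argmax())
            R = float(armed[choice])                        # collect if armed
            armed[choice] = False
            income[choice] += R
            R_mean += 0.001 * (R - R_mean)
            w += lr * (R - (1.0 - eps) * R_mean) * a        # covariance-style rule
            w = np.maximum(w, 1e-3)                         # keep efficacies positive

        # With eps = 0 the choice fraction w[0] / w.sum() settles near the income
        # fraction income[0] / income.sum(), i.e. Herrnstein matching; the paper
        # analyses how mistuning the subtraction (eps != 0) changes the behavior.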

    An Efficient Coding Hypothesis Links Sparsity and Selectivity of Neural Responses

    To what extent are sensory responses in the brain compatible with first-order principles? The efficient coding hypothesis posits that neurons use as few spikes as possible to faithfully represent natural stimuli. However, many sparsely firing neurons in higher brain areas seem to violate this hypothesis in that they respond more to familiar stimuli than to nonfamiliar stimuli. We reconcile this discrepancy by showing that efficient sensory responses give rise to stimulus selectivity that depends on the stimulus-independent firing threshold and the balance between excitatory and inhibitory inputs. We construct a cost function that enforces minimal firing rates in model neurons by linearly punishing suprathreshold synaptic currents. By contrast, subthreshold currents are punished quadratically, which allows us to optimally reconstruct sensory inputs from elicited responses. We train synaptic currents on many renditions of a particular bird's own song (BOS) and a few renditions of conspecific birds' songs (CONs). During training, model neurons develop a response selectivity with complex dependence on the firing threshold. At low thresholds, they fire densely and prefer CON and the reverse BOS (REV) over BOS. However, at high thresholds or when hyperpolarized, they fire sparsely and prefer BOS over REV and over CON. Based on this selectivity reversal, our model suggests that preference for a highly familiar stimulus corresponds to a high-threshold or strong-inhibition regime of an efficient coding strategy. Our findings apply to songbird mirror neurons, and in general, they suggest that the brain may be endowed with simple mechanisms to rapidly change selectivity of neural responses to focus sensory processing on either familiar or nonfamiliar stimuli. In summary, we find support for the efficient coding hypothesis and provide new insights into the interplay between the sparsity and selectivity of neural responses.
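
    The shape of such a cost can be sketched as follows. This is a hedged reading of the abstract: the exact functional form, the threshold and the weighting are assumptions, and the reconstruction requirement that makes training non-trivial is omitted; synaptic currents above the firing threshold are penalized linearly, currents below it quadratically.

        # Piecewise penalty on synaptic currents: linear above the firing
        # threshold (so spikes are expensive and responses stay sparse),
        # quadratic below it. Functional form and parameters are assumptions
        # based on the abstract; the paper couples this penalty to the
        # reconstruction of the sensory input from the responses.
        import numpy as np

        def current_cost(u, theta=1.0, lam=1.0):
            supra = u > theta
            linear_part = lam * np.sum(u[supra] - theta)   # suprathreshold: linear
            quad_part = 0.5 * np.sum(u[~supra] ** 2)       # subthreshold: quadratic
            return linear_part + quad_part

        def current_cost_grad(u, theta=1.0, lam=1.0):
            # piecewise derivative, usable for gradient descent on the currents
            return np.where(u > theta, lam, u)

        # Raising the threshold moves a unit from the dense-firing regime (where
        # the model prefers CON and reversed BOS) to the sparse-firing regime
        # (where it prefers BOS), the selectivity reversal described above.
        u = np.array([0.2, 0.8, 1.5, 3.0])
        print(current_cost(u, theta=1.0), current_cost(u, theta=2.5))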